AudioPairBank: Towards A Large-Scale Tag-Pair-Based Audio Content Analysis
Recently, sound recognition has been used to identify sounds such as cars and
rivers. However, sounds have nuances that may be better described by
adjective-noun pairs such as "slow car" and verb-noun pairs such as "flying
insects," which remain underexplored. Therefore, in this work we investigate the
relation between audio content and both adjective-noun pairs and verb-noun
pairs. Due to the lack of datasets with these kinds of annotations, we
collected and processed the AudioPairBank corpus consisting of a combined total
of 1,123 pairs and over 33,000 audio files. One contribution is the previously
unavailable documentation of the challenges and implications of collecting
audio recordings with these types of labels. A second contribution is to show
the degree of correlation between the audio content and the labels through
sound recognition experiments, which yielded an accuracy of 70%, thereby
also providing a performance benchmark. The results and study in this paper
encourage further exploration of the nuances in audio and are meant to
complement similar research performed on images and text in multimedia
analysis.
Comment: This paper is a revised version of "AudioSentibank: Large-scale
Semantic Ontology of Acoustic Concepts for Audio Content Analysis".
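As a rough illustration of the kind of sound-recognition benchmark described above, the sketch below trains a classifier over tag-pair labels from log-mel features. It is a minimal sketch, not the authors' pipeline: the directory layout (one folder per tag pair, e.g. slow_car/), the feature choice, and the classifier are all assumptions for illustration.

```python
# Minimal sketch of a tag-pair audio classification benchmark.
# Assumptions: audio is organized as <root>/<tag_pair>/<clip>.wav;
# features and classifier are illustrative, not the AudioPairBank
# authors' actual pipeline.
from pathlib import Path

import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def clip_features(path, sr=22050, n_mels=64):
    """Mean and std of a log-mel spectrogram as a fixed-length clip feature."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel)
    return np.concatenate([logmel.mean(axis=1), logmel.std(axis=1)])

root = Path("audiopairbank")  # hypothetical dataset root
X, y = [], []
for pair_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    for wav in pair_dir.glob("*.wav"):
        X.append(clip_features(wav))
        y.append(pair_dir.name)  # tag-pair label, e.g. "slow_car"

X_tr, X_te, y_tr, y_te = train_test_split(
    np.array(X), y, test_size=0.2, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(f"tag-pair accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2%}")
```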
Experiments on the DCASE Challenge 2016: Acoustic Scene Classification and Sound Event Detection in Real Life Recording
In this paper we present our work on Task 1, Acoustic Scene Classification,
and Task 3, Sound Event Detection in Real Life Recordings. Our experiments
include low-level and high-level features, classifier optimization, and other
heuristics specific to each task. Our performance on both tasks improved on the
baseline from DCASE: for Task 1 we achieved an overall accuracy of 78.9%
compared to the baseline of 72.6%, and for Task 3 we achieved a Segment-Based
Error Rate of 0.76 compared to the baseline of 0.91.
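For context on the Task 3 metric, the sketch below computes the segment-based error rate used in DCASE 2016, which counts substitutions, deletions, and insertions per fixed-length segment and normalizes by the number of active reference events: ER = (S + D + I) / N. It is a minimal reimplementation following the published metric definition, not the official sed_eval toolkit.

```python
def segment_based_error_rate(reference, estimated):
    """Segment-based error rate, ER = (S + D + I) / N, following the
    DCASE 2016 definition. `reference` and `estimated` are aligned lists
    of per-segment label sets (e.g. one set of active events per second).
    """
    S = D = I = N = 0
    for ref, est in zip(reference, estimated):
        fn = len(ref - est)   # active events the system missed
        fp = len(est - ref)   # events the system falsely reported
        S += min(fn, fp)      # a miss and a false alarm pair up as a substitution
        D += max(0, fn - fp)  # remaining misses are deletions
        I += max(0, fp - fn)  # remaining false alarms are insertions
        N += len(ref)         # active reference events in this segment
    return (S + D + I) / N if N else 0.0

# Toy example: three one-second segments.
ref = [{"car"}, {"car", "bird"}, set()]
est = [{"car"}, {"bird"}, {"dog"}]
print(segment_based_error_rate(ref, est))  # 0.667: one deletion, one insertion, N = 3
```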
Training Audio Captioning Models without Audio
Automated Audio Captioning (AAC) is the task of generating natural language
descriptions given an audio stream. A typical AAC system requires manually
curated training data of audio segments and corresponding text caption
annotations. The creation of these audio-caption pairs is costly, resulting in
general data scarcity for the task. In this work, we address this major
limitation and propose an approach to train AAC systems using only text. Our
approach leverages the multimodal space of contrastively trained audio-text
models, such as CLAP. During training, a decoder generates captions conditioned
on the pretrained CLAP text encoder. During inference, the text encoder is
replaced with the pretrained CLAP audio encoder. To bridge the modality gap
between text and audio embeddings, we propose the use of noise injection or a
learnable adapter during training. We find that the proposed text-only
framework performs competitively with state-of-the-art models trained with
paired audio, showing that efficient text-to-audio transfer is possible.
Finally, we showcase both stylized audio captioning and caption enrichment
while training without audio or human-created text captions.
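The bridging idea lends itself to a short sketch. Below, a caption decoder is trained on CLAP-style text embeddings perturbed with Gaussian noise, and at inference the text encoder is swapped for the audio encoder. This is a minimal sketch, not the paper's exact configuration: `clap_text_encoder`, `clap_audio_encoder`, `decoder`, and the `noise_std` value are hypothetical stand-ins.

```python
import torch
import torch.nn.functional as F

def inject_noise(embedding, noise_std=0.015):
    """Perturb an L2-normalized CLAP-style embedding with Gaussian noise so
    the caption decoder learns to tolerate the text-audio modality gap.
    The noise scale is an assumption, not the paper's reported value."""
    noisy = embedding + noise_std * torch.randn_like(embedding)
    return F.normalize(noisy, dim=-1)

# --- Training (text only): condition the decoder on noisy *text* embeddings. ---
# `clap_text_encoder` and `decoder` are hypothetical modules standing in for a
# frozen pretrained CLAP model and a caption decoder.
def training_step(captions, clap_text_encoder, decoder, optimizer):
    with torch.no_grad():                        # CLAP stays frozen
        text_emb = clap_text_encoder(captions)   # (batch, dim), L2-normalized
    cond = inject_noise(text_emb)
    loss = decoder.caption_loss(cond, captions)  # e.g. token cross-entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# --- Inference: swap in the *audio* encoder; the decoder is unchanged. ---
@torch.no_grad()
def caption_audio(waveforms, clap_audio_encoder, decoder):
    audio_emb = clap_audio_encoder(waveforms)    # shares the text embedding space
    return decoder.generate(audio_emb)
```

A learnable adapter would replace `inject_noise` with a small trained module mapping text embeddings toward the audio region of the shared space; the rest of the loop is unchanged.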
Experiments on the DCASE Challenge 2016: Acoustic scene classification and sound event detection in real life recording
In this paper we present our work on Task 1, Acoustic Scene Classification, and Task 3, Sound Event Detection in Real Life Recordings. Our experiments include low-level and high-level features, classifier optimization, and other heuristics specific to each task. Our performance on both tasks improved on the baseline from DCASE: for Task 1 we achieved an overall accuracy of 78.9% compared to the baseline of 72.6%, and for Task 3 we achieved a Segment-Based Error Rate of 0.48 compared to the baseline of 0.91.